000033_icon-group-sender _Thu May 25 16:36:32 2000.msg
Return-Path: <icon-group-sender>
Received: (from root@localhost)
by baskerville.CS.Arizona.EDU (8.9.1a/8.9.1) id QAA12383
for icon-group-addresses; Thu, 25 May 2000 16:34:42 -0700 (MST)
Message-Id: <200005252334.QAA12383@baskerville.CS.Arizona.EDU>
From: gep2@terabites.com
Date: Thu, 25 May 2000 17:07:40 -0500
Subject: Re: Hardware for HLLs; side helping of computer history
To: Icon-group@optima.CS.Arizona.EDU
Errors-To: icon-group-errors@optima.CS.Arizona.EDU
Status: RO
Content-Length: 9488
>>> <gep2@terabites.com> 00-05-23 2:53:24 PM writes:
> The market is limited; few people need the additional performance badly
> enough to put up with the cost and configuration hassles
> Video cards, sound cards, DSP chipsets, ... . Some do find it useful to speed
> up software by means of custom hardware, and are willing to pay for it.
True enough, for THOSE applications (where, in fact, you pretty much need the
hardware anyhow whether it has its own CPU capability or not). In the case of
Java, it's harder to justify *every* user paying for a hardware solution when
the *easier* and better solution is simple... just say NO to Java, and write the
software in a language that's inherently that much more efficient (or more) to
begin with. :-)
> Of course, that by itself is no guarantee of financial success for companies
> supplying such hardware.
Absolutely. I think it's a safe guess that the company developing such a
specialized Java hardware processor would fail (and probably after investing a
LOT of money in development). Even Sun, which apparently HAS such a processor
developed, isn't exactly racing to market with it.
> As for configuration hassles, well, a cynic would say that's what generic PCs
> are all about.
I'm not THAT cynical. It's more an issue of "what kind of slot would you plug
the board into?" and "do you HAVE a spare slot of that type?" etc etc. And of
course, these "legacy-free PCs" and other sealed-box "PCs" that some dumb
companies keep trying to promote would simply not allow adding such a board,
anyhow.
>> Given the limited money available to support the development, you'll probably
>> never be able to keep up in the throughput technology curve with the
>> general-purpose, mass-market processors.
> Money is always limited, which is precisely why people must always ask what is
> the correct way to do things. I would frame the following general question:
> Where, disregarding historical accidents and existing market factors, is the
> line properly drawn between hardware and software?
Well, I'd give an example which I think clearly errs in terms of putting too
much in software: these stupid "Winmodems", which replace a $2.00 dedicated
processor on the modem card with 30-50% of a $200 host processor... in effect,
$60-$100 worth of CPU capacity.
>> Eventually (and probably SOON) your balls-out high-speed
>> specialized custom processor will be blown away by a simple interpreter
>> running on cheap generic iron.
> True, if it is not on the Intel learning curve.
Or the AMD learning curve, or the Cyrix learning curve, or the IBM learning
curve, or one of the other such curves. And it's not even an issue of
"learning", it's an issue of how much money you have to keep developing more and
more complex and sophisticated versions to keep up. If the market for a Java
processor is limited to begin with, I think there's not going to be enough money
there to stay competitive (if there is even enough money to get a competitive
product to market to begin with).
>> If you go back about 7-8 years there was a lot of press about how Sun's
>> revolutionary SPARC RISC processor was going to put the Intel-family CISC
>> processors out of business, and redefine the PC. It hasn't really happened
>> that way, has it?
> I won't comment on the RISC/CISC wars---not smart enough---but I think your
> point is that the chip fab technology is capital-intensive, that Intel has the
> most capital, and therefore nobody but Intel could succeed at it today.
Certainly there is an *enormous* installed base of software and hardware, which
is not going to disappear overnight. As in the mainframe wars of the 60's and
70's, even companies like Amdahl and Memorex developing "compatible" mainframes
weren't enough to dislodge IBM... it took a major change of paradigm (the PC
combined with the LAN, which allowed clustering a nearly arbitrarily large
bunch of them around one or more shared databases) to make the old-style
mainframe largely irrelevant and pretty much consign those monsters to history.
I think that Intel (who _ought_ to understand that lesson well) is making a
*major* strategic mistake in the incompatible architecture of their
next-generation processors... one which is likely to leave AMD with a much
larger piece of the pie than they presently have.
> I somewhat forlornly agree (and would hope to be shown wrong). This is the
> money issue again.
I think you'll see a lot of advance by AMD into Intel's current territory. This
will be the biggest goof by Intel since their braindead CPU-serial-number thing
they built into the Pentium III.
> It's understandable why Intel chose the 4004/8008 CPU model over something
> like the Burroughs B5500/B6700 model in the year 197X---it wasn't possible to
> put a mainframe CPU on a controller chip then.
There's actually an interesting story there. Turns out that Intel designed the
4004, but a company called Computer Terminal Corporation in San Antonio (later
Datapoint Corporation) developed the architecture and instruction set for the
8008... Intel was one of several companies which was approached to build it for
them, and in fact both TI and Intel made (sorta) working prototypes. Intel did
it very reluctantly, because they were convinced that there WAS no market for a
general-purpose microcomputer-on-a-chip. The only reason they agreed to build
the part at all was because Intel back then considered themselves a MEMORY
company, and Computer Terminal Corporation was at the time the world's largest
buyer of MOS memory. So Intel was hoping to cement the loyalty (and memory
business) of Datapoint by building this "foolish" CPU chip for them. As it
turned out, the TI chip was electrically noisy enough that it barely worked at
all, and by the time Intel's was finally ready Datapoint had already advanced
enough on their own temporary design that the LSI part wasn't really all that
attractive. Datapoint signed away to Intel the rights to the design basically
in exchange for being relieved of the obligation to buy it from them, and Intel
(which had meanwhile leaked rumors about the 8008 project to other customers,
and found that there indeed MIGHT be a market for a microcomputer-on-a-chip
after all) went ahead to sell it for their own account. The rest, as they say,
is history. :-)
Datapoint, of course, is also the company that in 1976 developed the first
commercially successful local area network (hardware and software delivered to
the first customer in Sept 1977 and announced in Dec 1977); together with the
cheap/powerful single-chip microprocessor that Datapoint had also invented,
these two products (and their successors) later merged to create the world of
computing that we have today.
>> (You will remember that the whole raison d'etre of RISC was that by designing
>> a brain-damaged processor with a crippled primitive instruction set, they
>> could turn the crank faster...)
> To me, it's not so much seeing who's faster as simplifying life by replacing
> multiple software layers with silicon hardware designs informed by enduring
> software principles, i.e. HLL hardware.
I think it's basically a strategic error to presume that "the solution" is to
try to make one single processor go infinitely fast. I think a much better
solution (and this is part of what Datapoint's LAN concept, for which I was the
primary software developer, was about) is to try to eliminate the horsepower
race BY DESIGN: produce simple, cheap modules and arrange things to maximize
the potential fanout... so you can easily cluster as many processing units
around a given database as its load requires (regardless of how many individual
processors that means).
Meanwhile, I think it's much easier (and faster and cheaper to develop) to
manage complexity in software than in hardware.
> Once it's in silicon, it will be on the same chip-fab curve that the current
> Intel processors are on (especially if Intel did our hypothetical HLL chip and
> could make it a market success).
It's not just fab technology, though. If it were, we'd still be making
hugely-fast 8008's. To get the speed up, they've developed *hugely* more
complicated and expensive processor designs, at enormous cost (although clearly
the market has repaid that investment many times over).
> I do share your scepticism about the RISC concept. I was never really
> convinced that the RISC designs I used to see in Byte magazine articles would
> be intrinsically better for general purpose computing than, say, the
> Shannon-encoded stack machine that Tanenbaum (I believe it was) designed, _if_
> both were designed as rippin' fast hardware.
Absolutely. If you want performance, it seems pretty obvious to ME that putting
the subroutine or macro that executes a complex series of microinstructions
RIGHT ON THE DIE makes a lot more sense than forcing the processor to fetch the
(maybe dozens of) RISC instructions from main memory to accomplish the same
thing.
And I think that a good part of what eventually put Byte Magazine out of
business was their having made SO MANY bad calls through the 80's and early
90's.
Gordon Peterson
http://web2.airmail.net/gep2/
Support the Anti-SPAM Amendment! Join at http://www.cauce.org/
12/19/98: the day the Conservatives demonstrated their scorn for their
fraudulent sham of representative government. Voters, remember it!